
    Parametric study of EEG sensitivity to phase noise during face processing

    <b>Background: </b> The present paper examines the visual processing speed of complex objects, here faces, by mapping the relationship between object physical properties and single-trial brain responses. Measuring visual processing speed is challenging because uncontrolled physical differences that co-vary with object categories might affect brain measurements, thus biasing speed estimates. Recently, we demonstrated that early event-related potential (ERP) differences between faces and objects are preserved even when images differ only in phase information and amplitude spectra are equated across image categories. Here, we use a parametric design to study how early ERPs to faces are shaped by phase information. Subjects performed a two-alternative forced-choice discrimination between two faces (Experiment 1) or textures (two control experiments). All stimuli had the same amplitude spectrum and were presented at 11 phase noise levels, varying from 0% to 100% in 10% increments, using a linear phase interpolation technique. Single-trial ERP data from each subject were analysed using a multiple linear regression model. <b>Results: </b> Our results show that sensitivity to phase noise in faces emerges progressively in a short time window between the P1 and the N170 ERP visual components. The sensitivity to phase noise starts at about 120–130 ms after stimulus onset and continues for another 25–40 ms. This result was robust both within and across subjects. A control experiment using pink noise textures, which had the same second-order statistics as the faces used in Experiment 1, demonstrated that the sensitivity to phase noise observed for faces cannot be explained by the presence of global image structure alone. A second control experiment used wavelet textures that were matched to the face stimuli in terms of second- and higher-order image statistics. 
Results from this experiment suggest that higher-order statistics of faces are necessary but not sufficient to obtain the sensitivity to phase noise function observed in response to faces. <b>Conclusion: </b> Our results constitute the first quantitative assessment of the time course of phase information processing by the human visual brain. We interpret our results in a framework that focuses on image statistics and single-trial analyses.
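The linear phase interpolation used to build the noise continuum can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' stimulus code: the function name, the uniform random phase field, and the naive linear mix of phase angles (which ignores circular wrap-around) are all simplifications.

```python
import numpy as np

def phase_noise_image(img, noise_frac, rng=None):
    """Blend an image's phase spectrum with random phase.

    noise_frac = 0.0 returns (approximately) the original image;
    noise_frac = 1.0 yields a texture with the same amplitude
    spectrum but fully random phase.
    """
    rng = np.random.default_rng(rng)
    f = np.fft.fft2(img)
    amp = np.abs(f)                 # amplitude spectrum, held constant
    phase = np.angle(f)             # original phase
    rand_phase = rng.uniform(-np.pi, np.pi, size=img.shape)
    # linear interpolation between original and random phase
    mixed = (1.0 - noise_frac) * phase + noise_frac * rand_phase
    return np.fft.ifft2(amp * np.exp(1j * mixed)).real

# Stepping noise_frac from 0.0 to 1.0 in increments of 0.1 yields the
# 11 phase noise levels described in the abstract.
```

Because only the phase is mixed and the amplitude spectrum is reused unchanged, all noise levels share the same second-order image statistics, which is the property the control experiments exploit.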

    Age-related delay in information accrual for faces: Evidence from a parametric, single-trial EEG approach

    Background: In this study, we quantified age-related changes in the time-course of face processing by means of an innovative single-trial ERP approach. Unlike analyses used in previous studies, our approach does not rely on peak measurements and can provide a more sensitive measure of processing delays. Young and old adults (mean ages 22 and 70 years) performed a non-speeded discrimination task between two faces. The phase spectrum of these faces was manipulated parametrically to create pictures that ranged between pure noise (0% phase information) and the undistorted signal (100% phase information), with five intermediate steps. Results: Behavioural 75% correct thresholds were on average lower, and maximum accuracy was higher, in younger than older observers. ERPs from each subject were entered into a single-trial general linear regression model to identify variations in neural activity statistically associated with changes in image structure. The earliest age-related ERP differences occurred in the time window of the N170. Older observers had a significantly stronger N170 in response to noise, but this age difference decreased with increasing phase information. Overall, manipulating image phase information had a greater effect on ERPs from younger observers, which was quantified using a hierarchical modelling approach. Importantly, visual activity was modulated by the same stimulus parameters in younger and older subjects. The fit of the model, indexed by R2, was computed at multiple post-stimulus time points. The time-course of the R2 function showed significantly slower processing in older observers starting around 120 ms after stimulus onset. This age-related delay increased over time to reach a maximum around 190 ms, at which latency younger observers had a lead of around 50 ms over older observers. 
Conclusion: Using a component-free ERP analysis that provides precise timing of the visual system's sensitivity to image structure, the current study demonstrates that older observers accumulate face information more slowly than younger subjects. Additionally, the N170 appears to be less face-sensitive in older observers.
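The per-time-point regression and R2 time-course described above can be sketched as follows. This is a simplified illustration of the general single-trial approach, not the authors' analysis code; the function name, the single linear predictor, and the synthetic data layout are assumptions.

```python
import numpy as np

def r2_timecourse(erp, phase_levels):
    """R2 of a linear regression of single-trial ERP amplitude on
    phase information, fitted independently at every time point.

    erp          : (n_trials, n_timepoints) single-trial amplitudes
    phase_levels : (n_trials,) fraction of phase information per trial
    """
    n_trials, _ = erp.shape
    X = np.column_stack([np.ones(n_trials), phase_levels])  # intercept + slope
    # least-squares fit for all time points at once (multiple RHS columns)
    beta, *_ = np.linalg.lstsq(X, erp, rcond=None)
    resid = erp - X @ beta
    ss_res = (resid ** 2).sum(axis=0)
    ss_tot = ((erp - erp.mean(axis=0)) ** 2).sum(axis=0)
    return 1.0 - ss_res / ss_tot
```

Plotting the returned R2 values against post-stimulus time gives the kind of model-fit time-course whose lag between groups is interpreted as a processing delay.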

    Deceptive body movements reverse spatial cueing in soccer

    This article has been made available through the Brunel Open Access Publishing Fund. The purpose of the experiments was to analyse the spatial cueing effects of the movements of soccer players executing normal and deceptive (step-over) turns with the ball. Stimuli comprised normal-resolution or point-light video clips of soccer players dribbling a football towards the observer and then turning right or left with the ball. Clips were curtailed before or on the turn (-160, -80, 0 or +80 ms) to examine the time course of direction prediction and spatial cueing effects. Participants were divided into higher-skilled (HS) and lower-skilled (LS) groups according to soccer experience. In experiment 1, accuracy on full video clips was higher than on point-light clips, but results followed the same overall pattern. Both HS and LS groups correctly identified direction on normal moves at all occlusion levels. For deceptive moves, LS participants were significantly worse than chance, and HS participants were somewhat more accurate but nevertheless substantially impaired. In experiment 2, point-light clips were used to cue a lateral target. HS and LS groups showed faster reaction times to targets that were congruent with the direction of normal turns, and to targets incongruent with the direction of deceptive turns. The reversed cueing by deceptive moves coincided with earlier kinematic events than cueing by normal moves. It is concluded that the body kinematics of soccer players generate spatial cueing effects when viewed from an opponent's perspective. This could create a reaction time advantage when anticipating the direction of a normal move. A deceptive move is designed to turn this cueing advantage into a disadvantage: for opponents acting on the basis of advance information, deceptive moves prime responses in the wrong direction, an effect that may be only partly mitigated by delaying a response until veridical cues emerge.

    Longer fixation duration while viewing face images

    The spatio-temporal properties of saccadic eye movements can be influenced by cognitive demand and the characteristics of the observed scene. Probably due to its crucial role in social communication, face perception is argued to involve different cognitive processes compared with non-face object or scene perception. In this study, we investigated whether and how face and natural scene images can influence the patterns of visuomotor activity. We recorded monkeys' saccadic eye movements as they freely viewed monkey face and natural scene images. The face and natural scene images attracted a similar number of fixations, but viewing of faces was accompanied by longer fixations compared with natural scenes. These longer fixations were dependent on the context of facial features. The duration of fixations directed at facial contours decreased when the face images were scrambled, and increased at the later stage of normal face viewing. The results suggest that face and natural scene images can generate different patterns of visuomotor activity. The extra fixation duration on faces may be correlated with the detailed analysis of facial features.

    Gender differences in hemispheric asymmetry for face processing

    BACKGROUND: Current cognitive neuroscience models predict a right-hemispheric dominance for face processing in humans. However, neuroimaging and electromagnetic data in the literature provide conflicting evidence of a right-sided brain asymmetry for decoding the structural properties of faces. The purpose of this study was to investigate whether this inconsistency might be due to gender differences in hemispheric asymmetry. RESULTS: In this study, event-related brain potentials (ERPs) were recorded in 40 healthy, strictly right-handed individuals (20 women and 20 men) while they observed infants' faces expressing a variety of emotions. Early face-sensitive P1 and N1 responses to neutral vs. affective expressions were measured over the occipital/temporal cortices, and the responses were analyzed according to viewer gender. The results showed a strong right-hemispheric dominance in men, but a lack of asymmetry in the amplitude of the occipito-temporal N1 response in women to both neutral and affective faces. CONCLUSION: Men showed asymmetric functioning of visual cortex while decoding faces and expressions, whereas women showed more bilateral functioning. These results indicate the importance of gender effects in the lateralization of the occipito-temporal response during the processing of face identity, structure, familiarity, or affective content.

    Top-down and bottom-up modulation in processing bimodal face/voice stimuli

    <p>Abstract</p> <p>Background</p> <p>Processing of multimodal information is a critical capacity of the human brain, with classic studies showing bimodal stimulation either facilitating or interfering in perceptual processing. Comparing activity to congruent and incongruent bimodal stimuli can reveal sensory dominance in particular cognitive tasks.</p> <p>Results</p> <p>We investigated audiovisual interactions driven by stimulus properties (bottom-up influences) or by task (top-down influences) on congruent and incongruent simultaneously presented faces and voices while ERPs were recorded. Subjects performed gender categorisation, directing attention either to faces or to voices, and also judged whether the face/voice stimuli were congruent in terms of gender. Behaviourally, the unattended modality affected processing in the attended modality: the disruption was greater for attended voices. ERPs revealed top-down modulations of early brain processing (30-100 ms) over unisensory cortices. No effects were found on N170 or VPP, but from 180-230 ms larger right frontal activity was seen for incongruent than congruent stimuli.</p> <p>Conclusions</p> <p>Our data demonstrate that in a gender categorisation task the processing of faces dominates over the processing of voices. Brain activity showed different modulation by top-down and bottom-up information. Top-down influences modulated early brain activity, whereas bottom-up interactions occurred relatively late.</p>

    Impaired perception of facial motion in autism spectrum disorder

    Copyright: © 2014 O'Brien et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. Facial motion is a special type of biological motion that transmits cues for socio-emotional communication and enables the discrimination of properties such as gender and identity. We used animated average faces to examine the ability of adults with autism spectrum disorders (ASD) to perceive facial motion. Participants completed increasingly difficult tasks involving the discrimination of (1) sequences of facial motion, (2) the identity of individuals based on their facial motion and (3) the gender of individuals. Stimuli were presented in both upright and upside-down orientations to test for the difference in inversion effects often found when comparing ASD with controls in face perception. The ASD group's performance was impaired relative to the control group in all three tasks and, unlike the control group, the individuals with ASD failed to show an inversion effect. These results point to a deficit in facial biological motion processing in people with autism, which we suggest is linked to deficits in lower-level motion processing that we have previously reported.

    Comparison of choose-a-movie and approach-avoidance paradigms used to measure social motivation

    Social motivation is a subjective state which is rather difficult to quantify. It has sometimes been conceptualised as "behavioural effort" to seek social contact. Two paradigms based on this conceptualisation, approach-avoidance (AA) and choose-a-movie (CAM), have been used to measure social motivation in people with and without autism. However, in the absence of a direct comparison, it is hard to know which of these paradigms is more sensitive in estimating preference for social over non-social stimuli. Here we compare these two tasks for their utility in (1) evaluating social seeking in typical people and (2) identifying the influence of autistic traits on social motivation. Our results suggest that CAM reveals a clear preference for social stimuli over non-social stimuli in typical adults, but AA fails to do so. Also, social seeking measured with CAM, but not AA, has a negative relationship with autistic traits.

    From upright to upside-down presentation: A spatio-temporal ERP study of the parametric effect of rotation on face and house processing

    <p>Abstract</p> <p>Background</p> <p>While there is general agreement that picture-plane inversion is more detrimental to face processing than to other seemingly complex visual objects, the origin of this effect is still debated. Here, we address the question of whether face inversion reflects a quantitative or a qualitative change in processing mode by investigating the pattern of event-related potential (ERP) response changes with picture-plane rotation of face and house pictures. Thorough analyses of topography (Scalp Current Density maps, SCD) and dipole source modeling were also conducted.</p> <p>Results</p> <p>We find that, whilst stimulus orientation affected participants' response latencies for face and house decisions in a similar fashion, only the ERPs in the N170 latency range were modulated by picture-plane rotation of faces. The pattern of N170 amplitude and latency enhancement to misrotated faces displayed a curvilinear shape, with an almost linear increase for rotations from 0° to 90° and a dip at 112.5° up to 180° rotations. A similar discontinuity function was also described for SCD occipito-temporal and temporal current foci, with no changes in topographic distribution, suggesting that upright and misrotated faces activated similar brain sources. This was confirmed by dipole source analyses showing the involvement of bilateral sources in the fusiform and middle occipital gyri, the activity of which was differentially affected by face rotation.</p> <p>Conclusion</p> <p>Our N170 findings provide support for both the quantitative and qualitative accounts of face rotation effects. Although the qualitative explanation predicted the curvilinear shape of N170 modulations by face misrotations, topographical and source modeling findings suggest that the same brain regions, and thus the same mechanisms, are probably at work when processing upright and rotated faces. 
Taken collectively, our results indicate that the same processing mechanisms may be involved across the whole range of face orientations, but would operate in a non-linear fashion. Finally, the response tuning of the N170 to rotated faces extends previous reports and further demonstrates that face inversion affects perceptual analyses of faces, which is reflected within the time range of the N170 component.</p>